sufficient explanation
Sufficient Explanations in Databases and their Connections to Necessary Explanations and Repairs
Bertossi, Leopoldo, Pardal, Nina
The notion of cause, as formalized by Halpern and Pearl, has recently been applied to relational databases to characterize and compute causal explanations for query answers. In this work we consider the alternative notion of sufficient explanation. We investigate its connections with database repairs, as used for dealing with inconsistent databases, and with causality-based necessary explanations. We also obtain some computational results.
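The abstract leaves the formal definition implicit; the core intuition, at least for monotone queries, is that a sub-instance is a sufficient explanation when the query answer already holds on it alone. A minimal Python sketch of that intuition (the toy data and query are ours, not the paper's):

```python
# Toy database: a single binary relation Friend.
db = {
    ("alice", "bob"),   # Friend(alice, bob)
    ("bob", "carol"),   # Friend(bob, carol)
    ("alice", "dave"),  # Friend(alice, dave)
}

def query(instance):
    """Boolean conjunctive query: exists x, y, z. Friend(x, y) AND Friend(y, z)."""
    return any(y == y2 for (_, y) in instance for (y2, _) in instance)

def is_sufficient(subset, instance, q):
    """For a monotone query q, a subset S of instance D is a sufficient
    explanation for q(D) = True iff q already holds on S alone: the rest
    of the database can change freely without retracting the answer."""
    assert subset <= instance
    return q(subset)

print(is_sufficient({("alice", "bob"), ("bob", "carol")}, db, query))  # True
print(is_sufficient({("alice", "dave")}, db, query))                   # False
```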
- South America > Chile (0.40)
- North America > Canada > Ontario > National Capital Region > Ottawa (0.40)
- North America > Dominican Republic > Azua > Azua (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
- Information Technology > Data Science > Data Mining (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
Unifying Post-hoc Explanations of Knowledge Graph Completions
Lonardi, Alessandro, Badreddine, Samy, Besold, Tarek R., Martin, Pablo Sanchez
Post-hoc explainability for Knowledge Graph Completion (KGC) lacks formalization and consistent evaluations, hindering reproducibility and cross-study comparisons. This paper argues for a unified approach to post-hoc explainability in KGC. First, we propose a general framework to characterize post-hoc explanations via multi-objective optimization, balancing their effectiveness and conciseness. This unifies existing post-hoc explainability algorithms in KGC and the explanations they produce. Next, we suggest and empirically support improved evaluation protocols using popular metrics like Mean Reciprocal Rank and Hits@k. Finally, we stress the importance of interpretability as the ability of explanations to address queries meaningful to end-users. By unifying methods and refining evaluation standards, this work aims to make research in KGC explainability more reproducible and impactful.
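As a rough illustration of the multi-objective view (not the paper's exact formulation), a candidate explanation, i.e. a set of triples, can be scored on two competing axes: how strongly deleting it degrades the target prediction, and how small it is. The oracle `rank_fn` below is a hypothetical stand-in for re-scoring the KGC model on a modified graph:

```python
def explanation_objectives(expl, graph, base_rank, rank_fn):
    """Two competing objectives for a candidate explanation `expl` (a set of
    triples) of a predicted link with rank `base_rank` on the full graph:
      effectiveness -- how much the prediction's rank degrades once the
                       explanation's triples are deleted from the graph;
      conciseness   -- the negated explanation size (smaller is better).
    `rank_fn` is a hypothetical oracle that re-scores the prediction on a
    modified graph; it stands in for the KGC model being explained."""
    effectiveness = rank_fn(graph - expl) - base_rank  # larger = stronger effect
    conciseness = -len(expl)                           # larger = more concise
    return effectiveness, conciseness

def scalarize(objectives, lam=0.5):
    """Convex combination of the two objectives -- one standard way to reduce
    a multi-objective formulation to a single score (illustrative only)."""
    eff, con = objectives
    return (1 - lam) * eff + lam * con
```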
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California (0.14)
- Europe > France > Île-de-France (0.04)
- (9 more...)
- Research Report (1.00)
- Overview (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.66)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Semantic Networks (0.62)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.46)
Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations
Bassan, Shahaf, Elboher, Yizhak Yisrael, Ladner, Tobias, Althoff, Matthias, Katz, Guy
Despite significant advancements in post-hoc explainability techniques for neural networks, many current methods rely on heuristics and do not provide formal guarantees for the explanations they produce. Recent work has shown that it is possible to obtain explanations with formal guarantees by identifying subsets of input features that are sufficient to determine that predictions remain unchanged, using neural network verification techniques. Despite the appeal of these explanations, their computation faces significant scalability challenges. In this work, we address this challenge by proposing a novel abstraction-refinement technique for efficiently computing provably sufficient explanations of neural network predictions. Our method abstracts the original large neural network by constructing a substantially reduced network, where a sufficient explanation of the reduced network is also provably sufficient for the original network, hence significantly speeding up the verification process. If the explanation is insufficient on the reduced network, we iteratively refine the network size by gradually increasing it until convergence. Our experiments demonstrate that our approach enhances the efficiency of obtaining provably sufficient explanations for neural network predictions while additionally providing a fine-grained interpretation of the network's predictions across different abstraction levels.
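The abstract describes the procedure only at a high level; the loop it sketches might look roughly as follows, where `reduce`, `refine`, and `verify_sufficient` are placeholders for the paper's network abstraction and verification machinery, not real APIs:

```python
def provably_sufficient(network, x, candidate, reduce, refine, verify_sufficient):
    """Abstraction-refinement sketch: verify the candidate explanation on a
    reduced network first; by construction, sufficiency on the abstraction
    carries over to the original network. On failure, grow the abstraction."""
    reduced = reduce(network)               # substantially smaller network
    while True:
        if verify_sufficient(reduced, x, candidate):
            return candidate                # provably sufficient for `network` too
        if reduced.size >= network.size:    # fully refined: the verdict is final
            return None                     # candidate is genuinely insufficient
        reduced = refine(reduced, network)  # gradually increase the network size
```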
- Europe > Italy > Marche > Ancona Province > Ancona (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Germany > Berlin (0.04)
- (2 more...)
Self-Explaining Neural Networks for Business Process Monitoring
Bassan, Shahaf, Gur, Shlomit, Zeltyn, Sergey, Mavrogiorgos, Konstantinos, Eliav, Ron, Kyriazis, Dimosthenis
Tasks in Predictive Business Process Monitoring (PBPM), such as Next Activity Prediction, focus on generating useful business predictions from historical case logs. Recently, Deep Learning methods, particularly sequence-to-sequence models like Long Short-Term Memory (LSTM), have become a dominant approach for tackling these tasks. However, to enhance model transparency, build trust in the predictions, and gain a deeper understanding of business processes, it is crucial to explain the decisions made by these models. Existing explainability methods for PBPM decisions are typically *post-hoc*, meaning they provide explanations only after the model has been trained. Unfortunately, these post-hoc approaches have been shown to face various challenges, including lack of faithfulness, high computational cost, and significant sensitivity to out-of-distribution samples. In this work, we introduce, to the best of our knowledge, the first *self-explaining neural network* architecture for predictive process monitoring. Our framework trains an LSTM model that not only provides predictions but also outputs a concise explanation for each prediction, while adapting the optimization objective to improve the reliability of the explanations. We first demonstrate that incorporating explainability into the training process does not hurt model performance, and in some cases actually improves it. Additionally, we show that our method outperforms post-hoc approaches in the faithfulness of the generated explanations while also delivering substantial improvements in efficiency.
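A minimal PyTorch sketch of the general pattern the abstract describes, an LSTM whose forward pass also emits per-event importance weights, trained with a joint objective; layer sizes and the sparsity penalty are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SelfExplainingLSTM(nn.Module):
    """Sketch: an LSTM for next-activity prediction that also emits a
    per-event importance vector as its explanation."""
    def __init__(self, num_activities: int, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_activities, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.expl_head = nn.Linear(hidden, 1)        # per-event importance score
        self.cls_head = nn.Linear(hidden, num_activities)

    def forward(self, prefix):                       # prefix: (batch, seq) activity ids
        h, _ = self.lstm(self.embed(prefix))         # (batch, seq, hidden)
        weights = torch.softmax(self.expl_head(h).squeeze(-1), dim=1)  # explanation
        context = (weights.unsqueeze(-1) * h).sum(dim=1)
        return self.cls_head(context), weights

def joint_loss(logits, weights, target, sparsity=0.01):
    """Prediction loss plus an entropy penalty pushing explanations to be
    concise -- a stand-in for the paper's adapted optimization objective."""
    ce = nn.functional.cross_entropy(logits, target)
    entropy = -(weights * torch.log(weights + 1e-9)).sum(dim=1).mean()
    return ce + sparsity * entropy
```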
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Europe > Greece (0.04)
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Information Technology (0.68)
- Banking & Finance (0.46)
On the Probability of Necessity and Sufficiency of Explaining Graph Neural Networks: A Lower Bound Optimization Approach
Cai, Ruichu, Zhu, Yuxuan, Chen, Xuexin, Fang, Yuan, Wu, Min, Qiao, Jie, Hao, Zhifeng
The explainability of Graph Neural Networks (GNNs) is critical to various GNN applications, yet it remains a significant challenge. A convincing explanation should be both necessary and sufficient simultaneously. However, existing GNN explanation approaches focus on only one of the two aspects, necessity or sufficiency, or on a heuristic trade-off between the two. Theoretically, the Probability of Necessity and Sufficiency (PNS) holds the potential to identify the most necessary and sufficient explanation, since it mathematically quantifies the necessity and sufficiency of an explanation. Nevertheless, the difficulty of obtaining PNS due to non-monotonicity and the challenge of counterfactual estimation limit its wide use. To address the non-identifiability of PNS, we resort to a lower bound of PNS that can be optimized via counterfactual estimation, and propose a framework of Necessary and Sufficient Explanation for GNNs (NSEG) that optimizes this lower bound. Specifically, we depict the GNN as a structural causal model (SCM) and estimate counterfactual probabilities via interventions on the SCM. Additionally, we leverage continuous masks with a sampling strategy to optimize the lower bound and enhance scalability. Empirical results demonstrate that NSEG outperforms state-of-the-art methods, consistently generating the most necessary and sufficient explanations.
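For reference, the classical Tian–Pearl bounds give an identifiable lower bound of the kind the abstract alludes to. Writing the explanation as a binary mask M is our reading; whether NSEG optimizes exactly this bound cannot be determined from the abstract alone:

```latex
% PNS of a candidate explanation, written with a binary mask M
% (M = e: explanation kept; M = \bar{e}: explanation removed) and
% target prediction Y = y -- our notation, read off the abstract:
\mathrm{PNS}(e) = P\big(Y_{M=e} = y,\; Y_{M=\bar{e}} \neq y\big)

% Tian--Pearl bounds: the lower bound involves only interventional
% quantities, hence it remains estimable (and optimizable) even though
% PNS itself is not identifiable in general:
\max\big\{0,\; P(y \mid do(M = e)) - P(y \mid do(M = \bar{e}))\big\}
  \le \mathrm{PNS}(e) \le
\min\big\{P(y \mid do(M = e)),\; 1 - P(y \mid do(M = \bar{e}))\big\}
```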
- Asia > Singapore (0.04)
- Asia > China > Guangdong Province > Shantou (0.04)
- North America > United States (0.04)
- (3 more...)
Explainable Representations for Relation Prediction in Knowledge Graphs
Sousa, Rita T., Silva, Sara, Pesquita, Catia
Knowledge graphs represent real-world entities and their relations in a semantically rich structure supported by ontologies. Exploring this data with machine learning methods often relies on knowledge graph embeddings, which produce latent representations of entities that preserve structural and local graph neighbourhood properties but sacrifice explainability. However, in tasks such as link or relation prediction, understanding which specific features better explain a relation is crucial to support complex or critical applications. We propose SEEK, a novel approach for explainable representations to support relation prediction in knowledge graphs. It is based on identifying relevant shared semantic aspects (i.e., subgraphs) between entities and learning representations for each subgraph, producing a multi-faceted and explainable representation. We evaluate SEEK on two real-world, highly complex relation prediction tasks: protein-protein interaction prediction and gene-disease association prediction. Our extensive analysis using established benchmarks demonstrates that SEEK achieves significantly better performance than standard representation learning methods while identifying both sufficient and necessary explanations based on shared semantic aspects.
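One toy reading of "shared semantic aspects" is the set of ontology classes two entities have in common as ancestors, grouped into connected subgraphs; SEEK's actual aspect extraction and per-aspect embedding learning are richer than this sketch, and the edge direction and micro-ontology below are assumptions:

```python
import networkx as nx

def shared_semantic_aspects(onto, e1, e2):
    """Toy version of 'shared semantic aspects': ontology classes that are
    ancestors of both entities, split into one aspect per weakly connected
    component. Edges are assumed to point from a parent class to its
    children and annotated instances."""
    shared = nx.ancestors(onto, e1) & nx.ancestors(onto, e2)
    return [set(c) for c in nx.weakly_connected_components(onto.subgraph(shared))]

# Hypothetical micro-ontology: two proteins sharing one semantic aspect.
onto = nx.DiGraph([
    ("molecular_function", "kinase_activity"),
    ("molecular_function", "binding"),
    ("kinase_activity", "P1"),
    ("kinase_activity", "P2"),
    ("binding", "P1"),
])
print(shared_semantic_aspects(onto, "P1", "P2"))
# e.g. [{'molecular_function', 'kinase_activity'}]
```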
- Europe > Portugal > Lisbon > Lisbon (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Greece > Attica > Athens (0.04)
GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
Amara, Kenza, Ying, Rex, Zhang, Zitao, Han, Zhihao, Shan, Yinan, Brandes, Ulrik, Schemm, Sebastian, Zhang, Ce
As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so has their explainability. Users are increasingly interested in a better understanding of GNN models and their outcomes. Unfortunately, today's evaluation frameworks for GNN explainability often rely on a few inadequate synthetic datasets, leading to conclusions of limited scope due to a lack of complexity in the problem instances. As GNN models are deployed in more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, considering explainability along three different "user needs". We propose a unique metric that combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. On the inadequate but widely used synthetic benchmarks, surprisingly, shallow techniques such as personalized PageRank achieve the best performance at minimal computation time. But when the graph structure is more complex and nodes have meaningful features, gradient-based methods are the best according to our evaluation criteria. However, none dominates the others on all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study on fraud explanation over eBay transaction graphs to reflect the production environment.
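A sketch of the two standard fidelity measures underlying such a combined metric, plus one plausible way to merge them into a single score; the `model`, `graph.without`, and `graph.only` interfaces and the weighting are hypothetical, and the paper's exact characterization score may differ in detail:

```python
def fidelity_plus(model, graph, expl, target):
    """fid+: drop in the target prediction when the explanation is removed
    from the graph. High fid+ => the explanation is necessary."""
    return model(graph)[target] - model(graph.without(expl))[target]

def fidelity_minus(model, graph, expl, target):
    """fid-: drop in the target prediction when only the explanation is kept.
    Low fid- => the explanation is sufficient on its own."""
    return model(graph)[target] - model(graph.only(expl))[target]

def characterization_score(fid_plus, fid_minus, w_plus=0.5, w_minus=0.5):
    """Weighted harmonic mean of fid+ and (1 - fid-), rewarding explanations
    that score well on necessity and sufficiency at once (weights and exact
    form are illustrative)."""
    return (w_plus + w_minus) / (w_plus / fid_plus + w_minus / (1.0 - fid_minus))
```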
- North America > United States > Wisconsin (0.04)
- North America > United States > Texas (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- (3 more...)